    UAV Path Planning for Wireless Data Harvesting: A Deep Reinforcement Learning Approach

    Autonomous deployment of unmanned aerial vehicles (UAVs) supporting next-generation communication networks requires efficient trajectory planning methods. We propose a new end-to-end reinforcement learning (RL) approach to UAV-enabled data collection from Internet of Things (IoT) devices in an urban environment. An autonomous drone is tasked with gathering data from distributed sensor nodes subject to limited flying time and obstacle avoidance. While previous approaches, learning and non-learning based, must perform expensive recomputations or relearn a behavior when important scenario parameters such as the number of sensors, sensor positions, or maximum flying time change, we train a double deep Q-network (DDQN) with combined experience replay to learn a UAV control policy that generalizes over changing scenario parameters. By exploiting a multi-layer map of the environment fed through convolutional network layers to the agent, we show that our proposed network architecture enables the agent to make movement decisions for a variety of scenario parameters that balance the data collection goal with flight time efficiency and safety constraints. We also illustrate the considerable advantage in learning efficiency of using a map centered on the UAV's position over a non-centered map.
    Comment: Code available under https://github.com/hbayerlein/uav_data_harvesting, IEEE Global Communications Conference (GLOBECOM) 2020
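
    The map-centering step the abstract highlights is easy to picture in code. The snippet below is a minimal illustrative sketch, not the authors' implementation (that lives in the linked repository); the function name and the zero-padding choice are assumptions. It re-centers a multi-layer grid map on the UAV's cell so the convolutional layers always see the agent at a fixed position:

        import numpy as np

        def center_map_on_uav(global_map, uav_pos):
            # global_map: (layers, H, W) array of environment layers,
            # e.g. obstacles, IoT device locations, start/landing zones.
            # uav_pos: (row, col) cell currently occupied by the UAV.
            layers, h, w = global_map.shape
            r, c = uav_pos
            # Allocate a (2H-1, 2W-1) canvas and paste the global map so
            # that cell (r, c) lands exactly at the canvas center; cells
            # outside the original map stay zero (padding assumption).
            centered = np.zeros((layers, 2 * h - 1, 2 * w - 1),
                                dtype=global_map.dtype)
            centered[:, h - 1 - r:2 * h - 1 - r,
                     w - 1 - c:2 * w - 1 - c] = global_map
            return centered

    With the UAV pinned to the center, the same convolutional filters apply regardless of where the agent sits on the map, which is one plausible reading of the learning-efficiency gain the abstract reports.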

    Kinematic Model for Fixed-Wing Aircraft with Constrained Roll-Rate

    This technical report derives a kinematic model of fixed-wing aircraft that is based on a constrained roll rate. The new kinematic model can be used for trajectory planning and optimization.
    National Science Foundation (NSF), CNS-1646383
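
    The report itself is not quoted above, so the equations below give the standard coordinated-turn kinematics such a model typically starts from, not necessarily the report's exact formulation. Here (x, y) is planar position, v airspeed, psi heading, phi bank angle, g gravitational acceleration, and the roll-rate command u is the constrained quantity; the bounds u_max and phi_max are assumptions:

        \dot{x} = v\cos\psi, \qquad
        \dot{y} = v\sin\psi, \qquad
        \dot{\psi} = \frac{g}{v}\tan\phi, \qquad
        \dot{\phi} = u, \quad |u| \le u_{\max}, \quad |\phi| \le \phi_{\max}

    Bounding the roll rate makes the bank angle, and hence the turn rate, change continuously rather than jump as it can in a plain Dubins model, which is what suits such a model to trajectory planning and optimization.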

    Learning to Recharge: UAV Coverage Path Planning through Deep Reinforcement Learning

    Coverage path planning (CPP) is a critical problem in robotics, where the goal is to find an efficient path that covers every point in an area of interest. This work addresses the power-constrained CPP problem with recharge for battery-limited unmanned aerial vehicles (UAVs). In this problem, a notable challenge emerges from integrating recharge journeys into the overall coverage strategy, highlighting the intricate task of making strategic, long-term decisions. We propose a novel proximal policy optimization (PPO)-based deep reinforcement learning (DRL) approach with map-based observations, utilizing action masking and discount factor scheduling to optimize coverage trajectories over the entire mission horizon. We further provide the agent with a position history to handle emergent state loops caused by the recharge capability. Our approach outperforms a baseline heuristic and generalizes to different target zones and maps, with limited generalization to unseen maps. We offer valuable insights into DRL algorithm design for long-horizon problems and provide a publicly available software framework for the CPP problem.
    Comment: This work has been submitted to the IEEE for possible publication. Copyright may be transferred without notice, after which this version may no longer be accessible.
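
    Two of the ingredients named above are simple to sketch in isolation. The snippet below is an illustrative Python sketch, not the paper's implementation (which is in the released framework); the function names, schedule shape, and constants are all assumptions:

        import numpy as np

        def mask_logits(logits, valid):
            # Action masking: invalid actions (e.g. flying into an
            # obstacle or off the map) get a logit of -inf, so the
            # softmax policy assigns them zero probability.
            return np.where(valid, logits, -np.inf)

        def scheduled_gamma(step, g0=0.95, g1=0.999, anneal_steps=1_000_000):
            # Discount factor scheduling: anneal gamma toward 1 so the
            # agent starts with short-horizon credit assignment and
            # gradually optimizes over the full mission, recharges
            # included (constants here are placeholders).
            frac = min(step / anneal_steps, 1.0)
            return g0 + frac * (g1 - g0)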

    Edge Generation Scheduling for DAG Tasks using Deep Reinforcement Learning

    Directed acyclic graph (DAG) tasks are currently adopted in the real-time domain to model complex applications from the automotive, avionics, and industrial domains that implement their functionalities through chains of intercommunicating tasks. This paper studies the problem of scheduling real-time DAG tasks by presenting a novel schedulability test based on the concept of trivial schedulability. Using this schedulability test, we propose a new DAG scheduling framework (edge generation scheduling, EGS) that attempts to minimize the DAG width by iteratively generating edges while guaranteeing the deadline constraint. We study how to efficiently solve the problem of generating edges by developing a deep reinforcement learning algorithm combined with a graph representation neural network to learn an efficient edge generation policy for EGS. We evaluate the effectiveness of the proposed algorithm by comparing it with state-of-the-art DAG scheduling heuristics and an optimal mixed-integer linear programming baseline. Experimental results show that the proposed algorithm outperforms the state of the art by requiring fewer processors to schedule the same DAG tasks.
    Comment: Under review
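
    The core move of EGS, generating an edge and keeping it only if the DAG still meets its deadline, can be sketched as below. This is a simplification, not the paper's trivial-schedulability test or its learned policy (in the paper, the RL agent is what picks the edge (u, v)); wcet, deadline, and the acceptance check are assumptions:

        import networkx as nx

        def try_generate_edge(g, u, v, wcet, deadline):
            # Tentatively serialize task v after task u. Serializing
            # tasks narrows the DAG, reducing the processors it needs.
            g.add_edge(u, v)
            if not nx.is_directed_acyclic_graph(g):
                g.remove_edge(u, v)  # edge would create a cycle
                return False
            # Longest WCET-weighted path = the DAG's critical path.
            finish = {}
            for n in nx.topological_sort(g):
                finish[n] = wcet[n] + max(
                    (finish[p] for p in g.predecessors(n)), default=0)
            if max(finish.values()) > deadline:
                g.remove_edge(u, v)  # edge would break the deadline
                return False
            return True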

    UAV coverage path planning under varying power constraints using deep reinforcement learning

    UAV path planning using global and local map information with deep reinforcement learning

    A Cyber-Physical Prototyping and Testing Framework to Enable the Rapid Development of UAVs

    In this work, a cyber-physical prototyping and testing framework to enable the rapid development of UAVs is conceived and demonstrated. The UAV Development Framework is an extension of the typical iterative engineering design and development process, applied specifically to the rapid development of UAVs. Unlike other development frameworks in the literature, the presented framework allows for iteration throughout the entire development process from design to construction, using a mixture of simulated and real-life testing as well as cross-aircraft development. The framework includes low- and high-order methods and tools that can be applied to a broad range of fixed-wing UAVs and can either be combined and executed simultaneously or be executed sequentially. As part of this work, seven novel and enhanced methods and tools were developed for fixed-wing UAVs in the areas of flight testing, measurement, modeling and emulation, and optimization. A demonstration of the framework to quickly develop an unmanned aircraft for agricultural field surveillance is presented.